WorkServer White Paper

The GNP WorkServer

A NEBS Certified Platform for High Availability in the Central Office


Table of Contents


Overview

Design Goals

NEBS Certified
Advanced OAM&P Functionality
Advanced Cable Management
Hot-Swappability
Two-Step Removal Procedure
Simple Modular Design
Complete Documentation
Open Systems
Systems-Based High Availability
Modular Upgradeability

System Details

The Shelf Unit
WorkServer Modules
CPU Module with Integrated Power Supply
Media Array Module
SerialSmart Module
Ethernet Hub Module
SBus Expander Module
SBus I/O
RAID Controller Module
SCSI Switch
Cooling System
Intelligent Maintenance Network
Maintenance Computer and Interface
Maintenance Nodes

Technical Specifications






Overview


The GNP WorkServer is a NEBS certified telecommunications server for High Availability applications. The WorkServer includes the processing, I/O, and media capability required for deploying advanced computing technology. By combining everything into a unified system, the WorkServer enables extremely rapid development and deployment of the next generation of telecommunications services.

Components in the WorkServer include Sun SPARC 5, 20, or new UltraSPARC motherboards with integrated -48 VDC input power supplies, half- and full-height media options in removable carriers - including DAT, disks, and CD-ROM - Ethernet hubs, high-speed asynchronous serial controllers, external power supplies, SCSI switches, and hardware RAID disk controllers. The GNP WorkServer can be configured with SunOS or Solaris 2.x operating systems with a variety of hardware and software options. I/O options include X.25, ISDN, T1/E1, synchronous and asynchronous serial, FDDI, Fast Ethernet, Ethernet, ATM, and SCSI. By leveraging Sun and Open Systems technology, the WorkServer delivers an extremely cost-effective solution that utilizes the latest technology for high performance and high capacity without the associated headaches of complex systems integration or proprietary lock-in.

The GNP Intelligent Maintenance Network™ (IMN) connects all WorkServer components via an independent network for remote monitoring and maintenance functions. All of the Modules and the IMN are integrated with a passive, customizable Midplane for device interconnects, power, and alarming. The WorkServer includes a reliable high-flow forced-air cooling system capable of cooling an entire rack of WorkServer technology.

Quickly deploying an effective and profitable telecommunications service takes more than the right hardware; it takes seamless interoperability with the network's existing equipment and procedures. The WorkServer is designed from the ground up as a platform for reliable, highly available computing in the Central Office. This includes a full range of Operations, Administration, Maintenance, and Provisioning (OAM&P) support to minimize craft errors and enable remote and automatic maintenance and repair, as well as advanced cable management that virtually eliminates cabling errors during service or repair.

As a High Availability platform, the WorkServer provides an extremely cost-effective alternative to Fault Tolerance for services which require reliability and continuous uptime. High Availability systems use inexpensive components configured in redundant, active configurations which continue operating in case of individual component failure. Rather than focusing on proprietary hardware for millisecond response time, High Availability creates a complete system for greater overall uptime. The WorkServer was designed to provide all of the components and interconnects for rapid and robust deployment of High Availability applications.

The WorkServer is the most complete solution for advanced computing in today's network.








Design Goals


The WorkServer is designed to provide a robust computing platform for High Availability in the Central Office. Our goals are flawless operation in the Central Office, rapid development and deployment for new services, and reliable High Availability on a long-life platform. We do this through NEBS certification, advanced OAM&P functionality, Open Systems, systems-based High Availability, the Intelligent Maintenance Network, and modular upgradeability. The WorkServer makes it simple to develop on Sun and deploy on GNP.




NEBS Certified


The WorkServer has been tested and certified to meet or exceed the NEBS specification.

NEBS is the Bellcore standard for survivability in the Central Office. Almost all equipment deployed in Central Offices in the United States adheres to this strict specification for Earthquake/Shock/Vibration, Temperature and Humidity, Airborne Contamination, Acoustic Noise, and Electrical Protection. The GNP WorkServer meets or exceeds the NEBS standards GR-63-CORE and GR-1089-CORE in all of these areas, including the Zone 4 Earthquake standard. This means that for applications in the Central Office, the WorkServer is guaranteed to be safe, reliable and durable operating alongside the rest of your infrastructure investment.

Several companies design for NEBS compliance. However, few go to the trouble and expense of verifying that their implementation actually meets the standard. The WorkServer is thoroughly tested in labs around the country, enduring the infamous Shake 'n Bake and the destructive flammability test. As a result, the WorkServer is guaranteed to provide a reliable, safe computing platform wherever it is deployed.




Advanced OAM&P Functionality


Telecommunications companies rely on complex procedures to keep their network running smoothly. These procedures address Operations, Administration, Maintenance and Provisioning (OAM&P) for every piece of equipment deployed in their infrastructure. The WorkServer facilitates and simplifies the OAM&P procedures for computing technology in the network. We've simplified cable management, enhanced the craft interface, simplified module maintenance, and provided extensive documentation for craft procedures.

Studies on the reasons for downtime in telecommunications networks provide significant justification for focusing efforts on improving operational simplicity. The primary reason for downtime in these studies was operator error. This was followed closely by software faults in both the operating system and application. With the exception of hard drive failure, hardware was found to be a nearly insignificant factor in system downtime, rating below all other possible causes. In fact, the drive failures were readily predictable and preventable through proper maintenance procedures. In light of these studies, GNP believes that while hardware Fault Tolerance addresses certain problems very well, systems which are operator-fault-tolerant provide a more effective solution for most telecommunications applications.

In response to this need for simpler OAM&P procedures, the WorkServer is built from the ground up for easy servicing by trained craftspeople.

The WorkServer includes several features which directly simplify maintenance procedures. Many of the details are addressed elsewhere in this document, including fail-safe carrier removal procedures, remote alarming and control, accessible CPU carrier design, physical keys for all carrier modules, and reduced cable management due to the Midplane and Shelf Unit design. By designing for the entire lifecycle of the product, we have significantly reduced the ongoing expense and trouble of maintaining our systems over time. The enhanced OAM&P functionality significantly reduces the risk of craft error and provides a reliable means to deploy the system in your network. Advanced cable management, hot-swappability, two-step removal procedures, simple modular design, and complete documentation all make it easier for craftspeople to manage WorkServer equipment in the field.



Advanced Cable Management


The WorkServer has a unique Midplane design which reduces, and in some cases, eliminates front-panel cabling. I/O signals from CPU Modules, ethernet hubs, disk drives, asynchronous serial controllers, and other Modules are directed to the Midplane for routing to other components or systems. Except in special circumstances, there is no front-panel cabling required for I/O connections between WorkServer modules. This means that modules can be removed and replaced without unplugging and reconnecting cables, including ethernet, SCSI, video, T1/E1, serial, etc. The Midplane is a passive, customizable interconnect area with full support for any protocol or transmission method which runs over copper wire. Nearly all I/O cables can take advantage of the Midplane for routing. The Midplane enables the separation of I/O from the topology of device interconnects. Interconnects can be configured at the manufacturing site once and field craftspeople don't need to remember and replicate the correct topology every time they service a unit.


Hot-Swappability


Removing a telecommunications service from operation is more than an inconvenience; it often results in a loss of revenue. To minimize both scheduled and unscheduled downtime, the WorkServer Modules are all hot-swappable. In addition to the Modules, nearly all removable components, such as disk drives, individual fans, and the separate halves of the Midplane, can be removed and replaced while the system remains operational. Hot-swappability minimizes, and often eliminates, the need for system downtime for maintenance and repair.


Two-Step Removal Procedure


A built-in two-step removal process keeps trained craftspeople from accidentally removing the wrong component. In addition to the Fatal/Warning/Application alarm indicator mounted on the front-panel of every WorkServer module, there is an LED and two buttons for implementing a request and approval mechanism for taking components out of service. After identifying the Module by location and alarm LED status, the craft must first hold down the Enable button, then press the Request Out Of Service button. This generates a message on the Intelligent Maintenance Network which is monitored by automatic or manual systems on a Maintenance Computer, usually at the Network Operations Center. If the Module is the correct component for removal, the Maintenance Computer sends a message to the Module and the LED indicates Clear To Remove From Service. This procedure keeps field personnel from accidentally removing the wrong Module.
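The request-and-approval flow above amounts to a small state machine. The sketch below is purely illustrative; the class and state names are ours, not part of the WorkServer firmware:

```python
from enum import Enum, auto

class State(Enum):
    IN_SERVICE = auto()
    REMOVAL_REQUESTED = auto()
    CLEAR_TO_REMOVE = auto()

class RemovalStateMachine:
    """Hypothetical sketch of the two-step out-of-service flow for one Module."""
    def __init__(self):
        self.state = State.IN_SERVICE

    def press_request(self, enable_held):
        # The Request Out Of Service button only counts while Enable is held.
        if self.state is State.IN_SERVICE and enable_held:
            self.state = State.REMOVAL_REQUESTED  # message sent over the IMN

    def maintenance_computer_approves(self):
        # The Maintenance Computer confirms this is the correct Module.
        if self.state is State.REMOVAL_REQUESTED:
            self.state = State.CLEAR_TO_REMOVE  # LED: Clear To Remove From Service

module = RemovalStateMachine()
module.press_request(enable_held=False)   # ignored: Enable not held down
assert module.state is State.IN_SERVICE
module.press_request(enable_held=True)
module.maintenance_computer_approves()
print(module.state.name)  # CLEAR_TO_REMOVE
```

The key property is that no single action moves a Module from in-service to removable; both the local craftsperson and the remote Maintenance Computer must agree.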




Simple Modular Design


The WorkServer is designed around a modular architecture which enables simple configuration, repair and stocking. Any Module can be used for any configuration in a Shelf Unit, and once configured, unique physical keys identify module types so that only the correct Module can be placed in its designated slot in a Shelf Unit. This makes it easy to design and configure a system around standard hardware components. To service a system, it is simple and straightforward to remove the Module, replace it with a spare, and then service the removed Module. Each Module also has a unique electronic key which identifies it worldwide. This key can be used for automatic inventory tracking and simplified sparing management. The WorkServer modules package current technology for reliable, robust integration with all the advantages of the WorkServer.




Complete Documentation


The WorkServer ships with extensive, documented maintenance procedures for the entire platform. Full documentation and proper training are integral to ensuring that field personnel can effectively and efficiently maintain our systems. Since many maintenance procedures vary from application to application, GNP provides custom specifications for Operations, Administration, Maintenance, and Provisioning (OAM&P). Working in conjunction with the customer, GNP's engineering services architect and document procedures for craftspeople and support staff to optimize operations.




Open Systems


The WorkServer is built on Open Systems technology. We leverage standard off-the-shelf Sun SPARCengines and the standard Sun Solaris or SunOS operating system along with various third-party and in-house hardware and software. We package this technology in the WorkServer system to provide a reliable, NEBS certified computing solution that has all of the benefits of Open Systems plus all the functionality required for rapid and inexpensive deployment in the Central Office.

We use standard Sun motherboards and software so that system designers and software developers can develop on Sun and deploy on GNP with no modifications to the operating system, device drivers, or applications. All SBus and MBus modules are 100% compatible. Any software or hardware developed to work with or on a Sun SPARCstation will work on the GNP WorkServer, because it is a Sun. Projects which have been developed in the laboratory on Sun Microsystems can be immediately deployed in the field on the WorkServer.

By relying on Sun and other Open Systems vendors, the WorkServer takes advantage of the latest technological advances for improving the performance and capacity of the computing system. In addition, because most off-the-shelf technology is produced in high volume, reliability figures are much more accurate and quality control is typically of much higher caliber. The WorkServer provides the most advanced technology using the most reliable components available.

The combination of Open Systems technology, NEBS certification, and our own enhanced OAM&P features combine to dramatically reduce time-to-market and increase system reliability.




Systems-Based High Availability


The WorkServer is based on the premise of system reliability for maximum uptime in the telecommunications network. It enables High Availability and Continuous Availability services for an extremely cost-efficient alternative to expensive hardware Fault Tolerant systems.

Systems-based High Availability is based on the reliability of the entire system, not individual parts. Instead of focusing technology on the problems of millisecond hardware Fault Tolerance - where the CPU is guaranteed 99.999% uptime - we put our efforts into addressing uptime for the entire system, including the CPU, hard drives, RAID controllers, ethernet hubs, asynchronous communications controllers, SCSI switches, and more. Because we build for system reliability, we can take advantage of the performance, price, and reliability of standard off-the-shelf devices and keep the development effort based on non-proprietary open technology.

Systems-based High Availability is built using active, redundant systems with multiple components and I/O paths in case of failure. Utilizing middleware that operates transparently and resides on top of the operating system and below the application software, High Availability systems automatically failover when certain event thresholds are exceeded. For example, if a particular CPU, disk drive, or process does not respond during a specified interval then the system will shut down the component in question and revert to a known state of wellness - using a redundant component - and continue operation.

Unlike Fault Tolerance, High Availability can be implemented without modifying the application software, the OS, or the drivers. A layer of middleware operates transparently to continually monitor the active system. If a problem should occur, automatic failover procedures are initiated to properly shut down the failing system, bringing up the standby system to replace it. In addition, because the High Availability functionality resides in the software layer, it is simple and straightforward to extend the High Availability deeper into the application when needed.
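As an illustration of the threshold-based monitoring described above, the following sketch shows a minimal heartbeat check of the kind such middleware performs. The component names, timestamps, and timeout are hypothetical, not taken from GNP's actual middleware:

```python
def find_failed(components, timeout, now):
    """Return the components whose last heartbeat is older than `timeout`
    seconds; in a High Availability system these would be shut down and
    their redundant standbys brought into service."""
    return [name for name, last_seen in components.items()
            if now - last_seen > timeout]

# Hypothetical heartbeat timestamps (seconds); cpu-b has gone quiet.
heartbeats = {"cpu-a": 100.0, "cpu-b": 80.0, "disk-0": 99.5}
failed = find_failed(heartbeats, timeout=10.0, now=101.0)
print(failed)  # ['cpu-b']
```

Because this logic sits in a middleware layer, the monitored application itself needs no changes; only the list of monitored components and thresholds is configured per deployment.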

High Availability offers significant advantages over Fault Tolerance for particular applications. Typically, proprietary Fault Tolerant architectures require rewriting working code which has been developed on a platform in the lab. The code must be broken into segments so that stops can be inserted and the hardware can verify that the multiple CPUs are still operating in lock-step. Each of these stops causes a significant decrease in performance, and as the system approaches true Fault Tolerance, the performance continues to radically decrease. Furthermore, by introducing significant code modifications, porting to a Fault Tolerant architecture essentially means breaking the application to squeeze into a new box. The time lost and the risk of introducing new bugs during this process represent a significant expense and delay for deploying on hardware fault-tolerant platforms. The WorkServer enables immediate operation of working code developed on the industry standard open technology of Sun Microsystems. With the relatively plentiful supply of talented and experienced software engineers familiar with Sun technology, deployment on the WorkServer removes the cost of maintaining a core of expensive experts capable of programming proprietary hardware platforms.

Fault Tolerance is suitable - and sometimes necessary - for certain applications. In practice, however, the benefits in performance, time-to-market, and long-term cost of ownership of an Open Systems High Availability implementation make High Availability on the WorkServer the superior solution for many telecommunications applications.




Modular Upgradeability


The WorkServer is a long-life platform for computing in the Central Office. Compute, I/O and media storage technology advance at a rapid pace and telecommunications services must be able to evolve to remain competitive. To address the need to keep computing systems up-to-date, every Module in the WorkServer is independently plug-upgradeable. For example, to upgrade from 2GB hard drives to 4GB, simply remove the old drive according to documented procedures and install a new media carrier with a 4GB drive. When using RAID or other redundant technology, these upgrades can be performed on active systems without downtime. This is also true of processing technology. Modular upgradeability makes it easy to install new technology as soon as it reaches the marketplace and immediately take advantage of better performance, capacity, and price.

With plug-upgradeability for all Modules, the WorkServer provides the platform for High Availability computing in the Central Office for the next decade.





System Details


The WorkServer is a complete system with all of the components required for rapid deployment of new telecommunications services. Various configurations support a wide variety of applications. The building blocks of these applications are all the same: Shelf Unit, Modules, and Fan Unit. The Shelf Unit provides power, device interconnects, and physically houses all of the Modules. The Modules provide specific functionality such as processing, I/O and data storage. The Fan Unit provides the cooling for the entire system.




The Shelf Unit


Every WorkServer has at least one Shelf Unit and one Fan Unit. The Shelf Unit houses the WorkServer Modules such as CPUs, Media, SBus Expanders, and RAID Controllers. The Shelf Unit is the key to the enhanced OAM&P functionality such as advanced cable management, remote maintenance, and alarming.

Every Shelf Unit contains a Midplane which is the distribution mechanism for power, data I/O, and the Intelligent Maintenance Network. All WorkServer Modules, e.g., the CPU Module, Media Array Module, and I/O Module, vertically mount inside the Shelf Unit, plugged into the Midplane. All modules are hot-swappable with visible and audible local and remote alarming capability.

The back of the Midplane is a patch panel used for interconnecting data signals from one Module to another. The patch panel enables reconfiguration without requiring a new midplane design for each application. Furthermore, isolated components, such as a CPU Module, a fan, or an individual media drive on a Media Array Module, can be replaced or interchanged without recabling. This advanced cable management significantly reduces the risk of craft error when performing maintenance procedures.

It is important to note that unlike traditional Backplane technologies, the Midplane is a passive, customizable patch-panel for power and I/O interconnects. There is no fixed bandwidth or topology, nor are there limitations on the protocols or interfaces which can be used to transfer information. As a result, the Midplane can be uniquely configured to provide the most appropriate data channels for any particular application.

The Midplane can be separated into two symmetrical and electrically isolated half-midplanes. The half-midplanes are independently field-replaceable and hot-swappable. The Midplane has no active components, which nearly eliminates the possibility of electronic failure. Even in this unlikely case, the hot-swappable design enables replacement of one half-midplane while the other remains operational. In a mirrored redundant system, this enables complete redundancy, down to the level of the individual Shelf Unit.

The standard Shelf Unit for an EIA 19" rack holds a fourteen (14) slot Midplane. A 24" rack holds an eighteen (18) slot Midplane, and a 27" Telco rack holds a twenty (20) slot Midplane. Each Module populates a given number of slots as is indicated in the following sections.




WorkServer Modules


The GNP WorkServer is based upon a modular system which can be configured for a variety of applications within the same hardened Shelf Unit. This allows great flexibility while retaining the benefits of extremely tight integration, including the Intelligent Maintenance Network. Components such as CPU Modules, Media Array Modules, and I/O Modules, all leverage the same interface, power, and alarming technology, allowing them to seamlessly integrate with the highest level of reliability.

The Modules route power and maintenance signals to the Midplane. Also, the data for all Modules is routed to the Midplane patch panel for distribution to other Modules. This virtually eliminates external cables, which greatly facilitates regular maintenance and increases system reliability.

Any Module can occupy any slot in the Shelf Unit. This flexibility allows for a variety of configurations, supporting both CPU-intensive applications as well as systems which require large amounts of disk storage or I/O. For example, slots one through four can be allotted for a CPU Module, and slots five, six, seven and eight could hold Media Array Modules.

However, every Module has a unique physical key, assigned to its part type in each particular configuration. This means that only the appropriate Module can be plugged into a particular slot once the system is configured. This ensures that craftspeople only plug the correct Module into the correct slot.

All Modules have their own power supplies and fuses. This keeps power supply failures isolated and easy to repair, and removes the possibility that craftspeople will replace the wrong fuse in a centrally located fuse box. All Modules are hot-swappable and plug-replaceable.




CPU Module with Integrated Power Supply


The CPU Module is a vertically mounted carrier that holds a Sun SPARC 5, SPARC 20, Ultra 1, Ultra 2, or Ultra AX SPARCengine. All MBus modules and SBus slots are available and fully supported. A fully configured system might use two MBus modules, three SBus cards, and an SBus expander that can provide an additional six SBus slots. The CPU Module is hot-swappable and plug-replaceable.

The CPU Module also has allotted space for one half-height SCSI device and floppy drive, or one full-height SCSI device.

Most importantly, almost all data coming into or going out of the Sun SPARCengine, such as ethernet, SCSI, and asynchronous RS-232 and RS-422, is routed through the Midplane to the patch panel (on the back of the Midplane) for distribution to any selected Module.

Once again, in most cases, this eliminates almost all external cabling. Depending upon the application, however, there are certain configurations that might require external cabling.

When removed from the Shelf Unit, the CPU Module also allows direct access to all motherboard components. This allows simple repairs and installation of memory, SBus and MBus modules, as well as the boot disk or other media devices on the carrier. This greatly facilitates field repair procedures.

The CPU Module also houses the power supply, which converts dual-feed -48 volt direct current into +5 VDC and ±12 VDC for the CPU Module and onboard disk drives. Its dual-feed design provides a robust power source for the entire CPU Module, including boot drive and floppy.

The integrated Maintenance Node provides direct feedback to local and remote users about current conditions with alarms for under/over voltage, vibration, and temperature extremes. Alarm signals can trigger local and remote actions including lighting LEDs, emitting audio alerts, power cycling local units, and engaging software response for fail-over control and system shutdown.

The SPARC 5 and 20 CPU Modules occupy four slots on the Shelf Unit. The UltraSPARC CPU Modules occupy five slots.




Media Array Module


The Media Array Module houses various SCSI media storage devices. The 3x3 Media Array Module can accommodate up to three 3.5" full-height devices and the 5+3 Media Array Module can support one 5.25" and one 3.5" full-height device. Each SCSI drive is mounted on its own removable carrier with a dedicated power supply and alarming circuitry. The media carriers plug into the Media Array Module, which is a single vertically mounted module.

These hot-swappable media units can be replaced or upgraded without disabling the other units in the Module or the CPU Module controlling the device. Any standard SCSI device can be mounted for use in the Module, including disk drives, 4mm DAT drives, CD-ROMs, floppy drives, and others.

Each media carrier that holds a SCSI device utilizes an intelligent, fail-safe two-step request and release mechanism to prevent accidental removal. Under normal operation, the carrier is removed only after the CPU has properly dismounted the drive and the craftsperson has followed the craft-tolerant process for removal. This procedure greatly reduces the chance of incidental downtime due to craft error.

A Media Array Module occupies two slots in the Shelf Unit.



SerialSmart Module


GNP's commercial multiport serial product has been implemented in the WorkServer. SerialSmart is a high-performance asynchronous serial communications controller. There are a variety of configurations which provide from 16 to 64 asynchronous ports per SBus card. All ports have full hardware and software flow control and individual ports can be run at up to 232.2 Kbaud. We also support Y-connected configurations for redundant I/O paths.

The SerialSmart Module occupies one slot in the Shelf Unit.



Ethernet Hub Module


Twenty-four-port Ethernet hubs can be added on single cards and plugged into the Midplane. Eight ports are dedicated to the front panel and sixteen are dedicated to the Midplane. All monitoring and remote power cycling are supported, including SNMP.



SBus Expander Module


The SBus Expander Module enables six additional SBus slots for use with a CPU Module. It uses one SBus slot on the host machine, for a net gain of five SBus slots. All six SBus slots provide DVMA master support.



SBus I/O


The WorkServer allows full use of all SBus slots on CPU modules, with 100% compatibility for all SBus I/O products available for Sun Workstations. Additional I/O options include FDDI, CDDI, Fast Ethernet, T1/E1 crossconnects, HSI, Datakit, X.25, ISDN, T3, and ATM, as well as connectivity to legacy systems.

I/O Modules occupy one to three slots in the Shelf Unit.



RAID Controller Module


The WorkServer's RAID controller provides hardware RAID for disks in Media Array Modules. RAID stands for Redundant Array of Inexpensive Disks and provides an elegant solution for reliable data storage in any application.
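To illustrate the redundancy principle (this is a toy sketch, not GNP's controller firmware), the following shows the XOR parity scheme used by RAID levels such as RAID 5: the parity block is the XOR of the data blocks, so any single lost block can be rebuilt from the survivors:

```python
def parity(blocks):
    """XOR equal-length data blocks together to form the parity block."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

def rebuild(surviving_blocks, parity_block):
    """Reconstruct a single lost block: XOR of parity and all survivors."""
    return parity(surviving_blocks + [parity_block])

d0, d1, d2 = b"abcd", b"efgh", b"ijkl"
p = parity([d0, d1, d2])
# Simulate losing drive 1 and recovering its data from the others.
assert rebuild([d0, d2], p) == d1
```

A hardware controller performs this parity computation transparently on every write, so the host sees an ordinary SCSI disk that survives any single drive failure.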



SCSI Switch


The SCSI Switch provides multiple, redundant SCSI connections between two host computers and two chains of SCSI devices. This enables a secondary computer to access the same data as the primary computer in case of host failure. The SCSI Switch supports fast SCSI. When the primary computer is shut down, the SCSI Switch automatically redirects I/O control to the secondary machine.




Cooling System


The WorkServer's cooling system provides extremely robust and effective cooling for an entire rack of WorkServer technology. A single Fan Unit can provide cooling for up to three Shelf Units, with capacity to cool the system even in case of fan failure.

For a 48 cm/19" Shelf Unit, the cooling system utilizes a two- or four-fan Fan Unit. For a 60 cm/24" or 68 cm/27" Shelf Unit, it uses a three- or six-fan Fan Unit. Four- and six-fan configurations are used only when the Fan Unit is placed mid-rack, where two or three fans blow upward and the other two or three blow downward. Each fan in the Fan Unit runs directly from -48 VDC and provides 241 cubic feet per minute (CFM) of airflow with zero backpressure, or an estimated 181 CFM with 0.25" H2O backpressure (typical). As a result, even in a worst-case scenario with six SPARC 20 CPU Modules dissipating 1.8 kW, cooled by two fans, the temperature rise for air passing through the system is a mere 7.0°C, or 14.1°C in case of the failure of one fan. In a typical High Availability configuration, with two CPUs and disk drives, there is only a 2.1°C rise, or 4.1°C in case of failure.

These Fan Units are extremely quiet with an absolute decibel rating of only 55 dBA per fan, ensuring smooth, quiet operation. In case of failure, a baffle closes the fan opening to prevent air from escaping back through the failed fan. This ensures that the remaining fans will continue to operate effectively as a forced-air cooling system. An alarm will also be posted via the Intelligent Maintenance Network.




Intelligent Maintenance Network


The Intelligent Maintenance Network provides all of the alarm and maintenance functionality for telecommunications applications, as well as many enhancements. This includes alarm monitors for under- and over-voltage, over-current, temperature, vibration, and component failure anywhere in the system, including the Fan Units and fuses. Power cycling, device configuration, and direct access to OpenBoot through ttya are all managed through this network.

Integrated in the WorkServer Shelf Unit is an internal local operating network that connects all Modules. This Maintenance Network is completely isolated from all other data circuitry. Using integrated Maintenance Nodes, all CPUs, media drives, and peripheral devices can be monitored by Maintenance Computers in real time. This enables remote and onsite staff to immediately identify problems for quick resolution and repair. In the case of component failure, remote operations control centers can always get the current state of all components in the WorkServer system.

A straightforward command line interface composed of a few commands and parameters allows the user direct access to the maintenance network. For example,

CFG-PWR::005::ILIM=12.3A

will configure power supply number five to limit current at 12.3 amps.
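As a sketch of how such commands could be handled, the parser below decodes the COMMAND::ID::PARAM=VALUE shape seen in this example. The grammar is inferred from this single command, and the function name is illustrative, not part of the GNP software:

```python
import re

# Minimal parser for the maintenance CLI's COMMAND::ID::PARAM=VALUE form
# (e.g. "CFG-PWR::005::ILIM=12.3A").  The grammar beyond this one example
# is an assumption for illustration only.
CMD_RE = re.compile(
    r"^(?P<cmd>[A-Z-]+)::(?P<unit>\d+)::(?P<param>[A-Z]+)=(?P<value>.+)$"
)

def parse_maintenance_cmd(line):
    """Split one maintenance command into its command, unit, and parameter."""
    m = CMD_RE.match(line.strip())
    if not m:
        raise ValueError("unrecognized maintenance command: %r" % line)
    return {
        "command": m.group("cmd"),
        "unit": int(m.group("unit")),   # e.g. power supply number 5
        "param": m.group("param"),
        "value": m.group("value"),
    }

print(parse_maintenance_cmd("CFG-PWR::005::ILIM=12.3A"))
# -> {'command': 'CFG-PWR', 'unit': 5, 'param': 'ILIM', 'value': '12.3A'}
```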

The Intelligent Maintenance Network provides a single interface to all components in the WorkServer system for extremely powerful, yet simple control over all maintenance and service functions from a remote site.



Maintenance Computer and Interface


Maintenance Computers connect to the network and receive information from any Maintenance Node on it. Such a computer can be a Sun SPARCstation or PC compatible running GNP Maintenance Software, with direct control over all devices on the network, or a remote user can access the network via a modem with the full capabilities of a local console.

Remote and local computers can be used simultaneously or independently and can be configured for various levels of access control. For instance, a local machine may be configured to shut down or reconfigure automatically in response to environmental conditions, while a dial-in user can be restricted to monitoring functions until authenticated through a dial-back mechanism. A freely configurable maintenance network lets the system be tailored to the best implementation for each particular application.
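The access-level scheme described above can be sketched as a small capability table: a local console gets full control, while a dial-in session starts restricted and is promoted only after dial-back authentication. All names here are hypothetical illustrations, not the actual GNP Maintenance Software interface:

```python
# Hypothetical sketch of per-console access levels: a "monitor" session can
# only read state, while an "operate" session can also power-cycle and
# configure devices.  Names and levels are illustrative assumptions.
CAPABILITIES = {
    "monitor": {"read-alarms", "read-status"},
    "operate": {"read-alarms", "read-status", "power-cycle", "configure"},
}

class Console:
    def __init__(self, name, level="monitor"):
        self.name = name
        self.level = level          # dial-in sessions start restricted

    def authenticate_dialback(self):
        """Promote a dial-in session to full control after dial-back succeeds."""
        self.level = "operate"

    def can(self, action):
        return action in CAPABILITIES[self.level]

remote = Console("dial-in")
assert not remote.can("power-cycle")   # restricted to monitoring at first
remote.authenticate_dialback()
assert remote.can("power-cycle")       # full capabilities after dial-back
```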



Maintenance Nodes


Built into each WorkServer Module is a Maintenance Node: a state machine that monitors local alarms, front-panel switches, and environmental sensors. When an error or fault is detected, the node transmits alarm messages within milliseconds to all active Maintenance Computers. If no Maintenance Computer responds to an alarm notification, the Maintenance Node retransmits at regular intervals until it receives an acknowledgement. The alarm status is also reflected by the front-panel LEDs, which remain in the alarmed state until the condition that generated the alarm is resolved and the alarm status cleared. This helps craftspeople identify, locate, and fix the problem. Front-panel LEDs indicate Fatal, Warning, and Application-level alarms.
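The retransmit-until-acknowledged behavior described above can be sketched as a simple loop. The `send` and `ack_received` callables stand in for the real maintenance network, and the retry count is bounded here only for the demo (the text describes unbounded retransmission):

```python
import time

# Sketch of a Maintenance Node's alarm retransmission: keep resending the
# alarm at a fixed interval until some Maintenance Computer acknowledges it.
def notify_until_acked(send, ack_received, interval_s=1.0, max_tries=5):
    """Transmit an alarm repeatedly until acknowledged; returns the try count."""
    for attempt in range(1, max_tries + 1):
        send(attempt)              # broadcast the alarm message
        if ack_received():         # a Maintenance Computer answered
            return attempt
        time.sleep(interval_s)     # wait out the interval, then retransmit
    return max_tries               # bounded here for the demo only

# Toy demo: the "network" acknowledges on the third transmission.
sent = []
acks = iter([False, False, True])
tries = notify_until_acked(sent.append, lambda: next(acks), interval_s=0.0)
print(tries)   # -> 3
```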

Each GNP Module contains an integrated Maintenance Node with full alarm and network capabilities. In addition, Maintenance Nodes can be connected to third-party devices, such as routers and modem banks, that have an RS-232 monitor port.

Using the Maintenance Node, each Module can be powered on or off -- either onsite or by remote access. Furthermore, each Module has a unique electronic serial number to identify itself. This can be used to verify proper component repair and maintenance procedures, or to drive an automated inventory system for simplified stocking and tracking of available spares and replacement parts. In addition, physical location IDs let software and remote craft pinpoint the exact location of a module when reporting alarms. All maintenance firmware is stored in flash memory that can be rewritten locally or remotely for future expansion or customization of maintenance capabilities.



Maintenance Node Functions






Technical Specifications


Shelf Unit

48 cm/19" (14 slots) x 18 su (1 su = 25 mm high)
60 cm/24" (18 slots) x 18 su
68 cm/27" (20 slots) x 18 su
each slot 30 mm x 450 mm x 300 mm
Midplane cable space: 95 mm x 450 mm x Shelf Unit Width
Customizable power segmentation
Support for dual-feed A/B power input
Integrated Intelligent Maintenance Network
Power Dissipation
Max. 80 watts per 30 mm slot
Max. 1,200 watts per 48 cm (19") Shelf Unit
Max. 1,400 watts per 60 cm (24") Shelf Unit
Max. 1,600 watts per 68 cm (27") Shelf Unit

Fan Unit

48 cm (19"): two fans up and two fans down
60 cm (24") and 68 cm (27"): three fans up and three fans down
240/300 CFM per fan
Filtered Airflow
Fans and filters are hot-replaceable
26 cm (10.5") high x Shelf Unit Width

Operational Environment

Capable of Zone 4 earthquake operation
Operating Temperature:
39°C continuous
50°C short-duration
Nominal input power: -48 VDC
Short-term input power: -36 VDC to -60 VDC

Racks Supported: EIA 19", EIA 24", and customer-specific

Modules (slots occupied, description, and options):

CPU Module (4 or 5 slots) - SPARCengine 5 or 20; UltraSPARC 1, 2, or AX
  • All MBus Processor Modules - 50 MHz to 167 MHz
  • RAM: SE5: 256 MB; SE20: 512 MB; Ultra: 1 GB
  • I/O: All SBus cards plus motherboard options
  • Boot Disk, Floppy Disk, CD-ROM

RAID Module (1 slot) - Hardware RAID Controller
  • RAID 0, 1, 1+0, 4, and 5
  • Up to 18 MBytes/s host access
  • Controls devices in 3x3 and 5+3 Media Modules

3x3 Media Module (2 slots) - Removable Media Array Module
  • Three 3.5" full-height SCSI devices

5+3 Media Module (2 slots) - Removable Media Array Module
  • One 5.25" half-height SCSI device and one 3.5" full-height SCSI device

Media Carriers - For use in Media Modules
  • Hard Disk: up to 8 GB
  • 4 mm DAT
  • CD-ROM
  • SCSI Floppy

SBus Expander (3 slots) - 6-for-1 SBus Expansion Module
  • Support for DMA master in all six SBus slots

SerialSmart(TM) (1 slot) - SBus Asynchronous Serial Controller
  • 16, 32, or 64 ports, up to 230 kbps per port

Y-Connect Interface (1 slot) - SerialSmart Y-Connect Module
  • Provides redundant serial I/O connections

SCSI Switch (1 slot) - Electrical Switch for SCSI Devices
  • 2 x 2 connections - 2 Host, 2 Device Connections

Intelligent Power Supply (1 slot) - Intelligent power source for external devices (e.g., modems, terminals)
  • Up to 12 external programmable power taps

Ethernet Module (1 slot) - Integrated Ethernet Hub
  • 24 ports, 10Base-T
  • 8 front-panel-accessible
  • 16 Midplane-accessible
  • SNMP Support



For more information about the WorkServer and other GNP Computers products and news, please contact us or visit our Website at http://www.gnp.com.





Copyright © 1996 GNP Computers, Inc.